Parallelism and Compilers
Author
Abstract
syntax tree, 76
AC, 222
access descriptor, 104
activity block of a group, 122
activity region of a group, 122, 137
ADD concept, 87
alive nodes, 42
alloc function, 145
ANSI C, 124
ASSIGN concept, 88
async type qualifier, 133
asynchronous access, 257
asynchronous execution mode, 132, 133
asynchronous function, 126, 133, 136
Asynchronous PRAM, 259
asynchronous program region, 132, 133, 136
atomic operation, 147
atomic_add, 260
automatic data layout, 77
automatic parallelization, 77
automatic performance prediction, 77
available processors, 253
backward substitution, 89
barrier statement, 146
basic block, 22, 192
basic concepts, 84
bisector, 61
BSPlib, 263
bucket sort, 100
bulk array access, 103
Cilk, 222
clause, 109, 111
CM-Fortran, 223
code motion, 99
combine phase, 210
combine strategy, 210
complete search algorithm, 30
CON concept, 87
concept, 76
concept comprehension, 76
concept idiom, 76
concept instance, 76
concept instance list, 107
concept instance parameters, 76
concept instance slot, 76
concept occurrence, 76
cone, 23
constant constructor concepts, 84, 87
context switch, 253
contiguous schedule, 26
control-synchronicity, 192
control-synchronous, 192
COO, 79
coordinate storage format, 79
critical section, 147
cross edge, 93, 103
cross-processor data dependence, 252
cross-processor data dependence graph, 262
CSL, 115
CSP, 221
CSR, 80
CUR, 80
DAG (directed acyclic graph), 22
data dependence: cross-processor, 252
data parallelism, 180, 223, 226
data structure replacement, 78, 83
dataparallel code, 261
deadlock, 149
debugging support, 78
debugging support in CSL, 114
Similar resources
The Potential of Exploiting Coarse-Grain Task Parallelism from Sequential Programs
Research into automatic extraction of instruction-level parallelism and data parallelism from sequential languages by compilers has been going on for many years. However, task parallelism has been almost unexploited by parallelizing compilers. It has been shown that coarse-grain task parallelism is a useful additional resource of parallelism for multiprocessors, but the simple and restricted ex...
Full text

The Impact of Data Communication and Control Synchronization on Coarse-Grain Task Parallelism
Research into automatic extraction of instruction-level parallelism and data parallelism from sequential languages by compilers has been going on for many years. However, task parallelism has been almost unexploited by parallelizing compilers. It has been shown that coarse-grain task parallelism is a useful additional resource of parallelism for multiprocessors, but the simple and restricted ex...
Full text

Machine-Independent Evaluation of Parallelizing Compilers
A method is presented for measuring the degree of success of a compiler at extracting implicit parallelism. The outcome of applying this method to evaluate a state-of-the-art parallelizer, KAP/Concurrent, using the Perfect Benchmarks and a few linear-algebra routines indicates that there is much room for improvement in the current generation of parallelizing compilers.
Full text

Exploiting Locality and Parallelism in Pointer-based Programs
While powerful optimization techniques are currently available for limited automatic compilation domains, such as dense array-based scientific and engineering numerical codes, a similar level of success has eluded general-purpose programs, especially symbolic and pointer-based codes. Current compilers are not able to successfully deal with parallelism in those codes. Based on our previously deve...
Full text

Speculative Thread Execution in a Multithreaded Dataflow Architecture
Instruction Level Parallelism (ILP) in modern superscalar and VLIW processors is achieved using out-of-order execution, branch prediction, value prediction, and speculative execution of instructions. These techniques are not scalable. This has led to multithreading and multi-core systems. However, such processors require compilers to automatically extract thread-level or task-level paralleli...
Full text

Support and Efficiency of Nested Parallelism in OpenMP Implementations
Nested parallelism has been a major feature of OpenMP since its very beginning. As a programming style, it provides an elegant solution for a wide class of parallel applications, with the potential to achieve substantial utilization of the available computational resources in situations where outer-loop parallelism simply cannot. Notwithstanding its significance, nested parallelism support w...
Full text